Reviews: Pareto Multi-Task Learning
This paper mainly combines another MOO algorithm with MOO-MTL, improving on the results of last year's NeurIPS paper on multi-objective MTL. The technical contribution to MOO and MTL is limited, since the paper borrows the MOO optimization method directly from references [24] and [29]. Nevertheless, I think this paper has potential impact in the MTL community, since I have not found any previous paper that achieves similar effects, which could guide people to obtain a variety of high-quality MTL results without random trials. Quality: below average. The quality of the paper is below the bar: some important parts are missing, and the analysis lacks depth.
Reviews: Pareto Multi-Task Learning
The paper extends recent (NeurIPS 2018) work that poses multi-task learning as multi-objective optimization. Although the submission is somewhat incremental, it is significant. Finding only an arbitrary point on the Pareto efficiency curve is a significant limitation of the prior work, and practitioners would rather recover the entire Pareto efficiency curve. The submission overcomes this limitation. Moreover, the empirical results support the claim and show the significance of the method.
Pareto Multi-Task Learning
Multi-task learning is a powerful method for solving multiple correlated tasks simultaneously. However, it is often impossible to find one single solution that optimizes all the tasks, since different tasks may conflict with each other. Recently, a novel method was proposed to find one single Pareto optimal solution with a good trade-off among different tasks by casting multi-task learning as multiobjective optimization. In this paper, we generalize this idea and propose a novel Pareto multi-task learning algorithm (Pareto MTL) to find a set of well-distributed Pareto solutions that represent different trade-offs among the tasks. The proposed algorithm first formulates a multi-task learning problem as a multiobjective optimization problem, and then decomposes the multiobjective optimization problem into a set of constrained subproblems with different trade-off preferences.
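The decomposition step described above can be sketched in code. The following is a minimal illustration for the two-task case only, with hypothetical helper names (`preference_vectors`, `active_constraints`) and a simplified even-angle construction of the preference vectors; it is not the authors' implementation. The idea it demonstrates: each subproblem k minimizes the task losses subject to the loss vector staying in the subregion "closest" (by inner product) to preference vector u_k, i.e. the constraints (u_j - u_k)^T l(theta) <= 0 for all j.

```python
import numpy as np

def preference_vectors(n_prefs, ):
    """Evenly spread unit preference vectors over the positive
    quadrant for a 2-task problem (a simplified construction)."""
    angles = np.linspace(0.0, np.pi / 2, n_prefs)
    return np.stack([np.cos(angles), np.sin(angles)], axis=1)

def active_constraints(losses, prefs, k):
    """Indices j whose subregion constraint for subproblem k,
    (u_j - u_k)^T l <= 0, is violated by the current loss vector.
    In Pareto MTL these violated constraints enter the update
    direction alongside the task gradients."""
    gaps = prefs @ losses - prefs[k] @ losses
    return np.where(gaps > 0)[0]

prefs = preference_vectors(5)
losses = np.array([1.0, 0.1])  # task 1 loss high, task 2 loss low
# The loss vector lies in the subregion of the first preference
# vector, so subproblem 0 has no violated constraints, while
# subproblem 4 (the opposite extreme) has several.
print(active_constraints(losses, prefs, 0))
print(active_constraints(losses, prefs, 4))
```

Solving one such constrained subproblem per preference vector yields the set of well-distributed Pareto solutions the abstract refers to.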
Lin, Xi, Zhen, Hui-Ling, Li, Zhenhua, Zhang, Qing-Fu, Kwong, Sam